
 knapsack constraint







Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint

Neural Information Processing Systems

Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, frequently those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a $5.83$-approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a 9-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
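To make the problem setting concrete, here is a minimal sketch of the classic density-greedy baseline for submodular maximization under a knapsack constraint: repeatedly pick the feasible element with the largest marginal-gain-per-cost, then safeguard by comparing against the best feasible singleton. This is the standard textbook baseline, not the paper's randomized $5.83$-approximation algorithm; the function and variable names are illustrative.

```python
def greedy_knapsack(ground, f, cost, budget):
    """Density greedy for a submodular set function f under a knapsack
    constraint: sum of cost[e] for e in the solution must not exceed budget.

    Picks elements by marginal gain per unit cost, then returns the better
    of the greedy set and the best feasible singleton (a standard safeguard
    against instances where one expensive element dominates).
    """
    S, spent = set(), 0.0
    remaining = set(ground)
    while remaining:
        base = f(S)
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + cost[e] > budget:
                continue
            gain = f(S | {e}) - base
            if gain / cost[e] > best_ratio:
                best, best_ratio = e, gain / cost[e]
        if best is None:  # no feasible element with positive gain remains
            break
        S.add(best)
        spent += cost[best]
        remaining.discard(best)
    # Safeguard: compare against the best single feasible element.
    singles = [e for e in ground if cost[e] <= budget]
    if singles:
        top = max(singles, key=lambda e: f({e}))
        if f({top}) > f(S):
            return {top}
    return S
```

With a coverage objective (the value of a set is the size of the union of the subsets it covers), the greedy builds a solution element by element while respecting the budget.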


Submodular Maximization in Clean Linear Time

Neural Information Processing Systems

In this paper, we provide the first deterministic algorithm that achieves $1/2$-approximation for monotone submodular maximization subject to a knapsack constraint, while making a number of queries that scales only linearly with the size of the ground set $n$. Moreover, our result automatically paves the way for developing a linear-time deterministic algorithm that achieves the tight $1-1/e$ approximation guarantee for monotone submodular maximization under a cardinality (size) constraint. To complement our positive results, we also show strong information-theoretic lower bounds. More specifically, we show that when the maximum cardinality allowed for a solution is constant, no deterministic or randomized algorithm making a sub-linear number of function evaluations can guarantee any constant approximation ratio. Furthermore, when the constraint allows the selection of a constant fraction of the ground set, we show that any algorithm making fewer than $\Omega(n/\log(n))$ function evaluations cannot perform better than an algorithm that simply outputs a uniformly random subset of the ground set of the right size. We extend our results to the general case of maximizing a monotone submodular function subject to the intersection of a $p$-set system and multiple knapsack constraints. Finally, we evaluate the performance of our algorithms on multiple real-life applications, including movie recommendation, location summarization, Twitter text summarization, and video summarization.
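For context on query-efficient deterministic algorithms, the following is a sketch of the standard decreasing-threshold greedy of Badanidiyuru and Vondrák for a cardinality constraint, which achieves $(1 - 1/e - \varepsilon)$ with $O((n/\varepsilon)\log(n/\varepsilon))$ value queries. This is the well-known prior technique the abstract's linear-time result improves upon, not the paper's own algorithm.

```python
def threshold_greedy(ground, f, k, eps=0.1):
    """Decreasing-threshold greedy for monotone submodular maximization
    subject to |S| <= k.

    Instead of scanning all elements for the single best one in each round,
    it sweeps a geometrically decreasing threshold tau and adds any element
    whose marginal gain meets tau, cutting the query count to
    O((n/eps) * log(n/eps)).
    """
    d = max(f({e}) for e in ground)   # largest singleton value
    S = set()
    tau = d
    lo = eps * d / len(ground)        # stopping threshold
    while tau >= lo and len(S) < k:
        for e in list(ground):
            if e in S or len(S) >= k:
                continue
            if f(S | {e}) - f(S) >= tau:
                S.add(e)
        tau *= (1 - eps)
    return S
```

Each element is examined once per threshold level, and the number of levels is logarithmic, which is the source of the near-linear query complexity.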


Fast Approximation Algorithm for Non-Monotone DR-submodular Maximization under Size Constraint

Tran, Tan D., Pham, Canh V.

arXiv.org Artificial Intelligence

This work studies non-monotone DR-submodular maximization over a ground set of size $n$ subject to a size constraint $k$. We propose two approximation algorithms for solving this problem, named FastDrSub and FastDrSub++. FastDrSub offers an approximation ratio of $0.044$ with query complexity of $O(n \log(k))$. The second one, FastDrSub++, improves upon it with a ratio of $1/4-ε$ within query complexity of $O(n \log k)$ for an input parameter $ε>0$. Therefore, our proposed algorithms are the first constant-ratio approximation algorithms for the problem with the low complexity of $O(n \log(k))$. Additionally, both algorithms are experimentally evaluated and compared against existing state-of-the-art methods, demonstrating their effectiveness in solving the Revenue Maximization problem with a DR-submodular objective function. The experimental results show that our proposed algorithms significantly outperform existing approaches in terms of both query complexity and solution quality.


A Proof of Theorem 4.1

Neural Information Processing Systems

In this appendix, we provide the proof of Theorem 4.1, which is omitted from the main text due to space limitations. We should point out that the choice of our potential function works best for a combination of $k$ matroids and $\ell = k$ knapsacks. When the numbers of matroid and knapsack constraints are not equal, we can always add redundant constraints so that $k$ is the maximum of the two numbers. For this reason, in the rest of this proof, we assume $\ell = k$. Then, we always have $\varphi(S + a) \le \varphi(S)$ whenever $\gamma(S + a) < 1$. Proof.


Submodular Maximization Through Barrier Functions

Neural Information Processing Systems

In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization. This connection not only improves the running time for constrained submodular maximization but also provides the state-of-the-art guarantee. More precisely, for maximizing a monotone submodular function subject to the combination of a $k$-matchoid and $\ell$-knapsack constraints (for $\ell \le k$), we propose a potential function that can be approximately minimized. Once we minimize the potential function up to an $ε$ error, it is guaranteed that we have found a feasible set with a $2(k+1+ε)$-approximation factor, which can indeed be further improved to $(k+1+ε)$ by an enumeration technique. We extensively evaluate the performance of our proposed algorithm over several real-world applications, including a movie recommendation system, summarization tasks for YouTube videos, Twitter feeds and Yelp business locations, and a set cover problem.
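The barrier-function idea that inspires this technique can be illustrated on a toy continuous problem: a logarithmic barrier turns a constrained minimization into an unconstrained one whose penalty blows up at the constraint boundary. This is a generic log-barrier sketch of the continuous-optimization inspiration, not the paper's discrete potential function; the objective and parameters below are illustrative.

```python
def barrier_minimize(x, t=100.0, lr=1e-3, steps=20000):
    """Minimize f(x) = (x - 2)**2 subject to x < 1 via a log barrier.

    We run gradient descent on phi_t(x) = f(x) - (1/t) * log(1 - x).
    The barrier term's gradient diverges as x approaches 1, keeping the
    iterate strictly feasible; as t grows, the minimizer of phi_t
    approaches the constrained optimum x = 1.
    """
    for _ in range(steps):
        # d/dx [ (x-2)^2 - (1/t) * log(1 - x) ]
        grad = 2 * (x - 2) + (1.0 / t) / (1.0 - x)
        x -= lr * grad
    return x
```

Starting from a strictly feasible point such as `x = 0.0`, the iterate climbs toward the boundary but stops just short of it, at roughly $1 - 1/(2t)$ for this objective.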